
    Trust-based model for privacy control in context aware systems

    In context-aware systems, there is a high demand for privacy solutions for users who interact and exchange personal information. Privacy in this context encompasses reasoning about the trust and risk involved in interactions between users. Trust, therefore, controls the amount of information that can be revealed, and risk analysis allows us to evaluate the expected benefit that would motivate users to participate in these interactions. In this paper, we propose a trust-based model for privacy control in context-aware systems that incorporates both trust and risk. This approach clarifies how to reason about trust and risk when designing and implementing context-aware systems that provide mechanisms to protect users' privacy. Our approach also includes experiential learning mechanisms that draw on past observations to reach better decisions in future interactions. The model outlined in this paper is an attempt to address the concerns of privacy control in context-aware systems. To validate the model, we are currently applying it to a context-aware system that tracks users' location. We hope to report on the performance evaluation and implementation experience in the near future.
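
    As a rough illustration of the trade-off this abstract describes, the sketch below combines a trust threshold on disclosure with a benefit-versus-risk check and a simple experiential update of trust. The item sensitivities, thresholds and update rule are our own illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch of a trust/risk-based disclosure decision.
# All names, thresholds, and the update rule are illustrative assumptions.

SENSITIVITY = {"city": 0.2, "street": 0.5, "exact_location": 0.9}

def disclosable_items(trust: float, expected_benefit: float, risk: float):
    """Reveal an item only if trust covers its sensitivity and the
    interaction's expected benefit outweighs its risk."""
    if expected_benefit <= risk:          # not worth interacting at all
        return []
    return [item for item, s in SENSITIVITY.items() if trust >= s]

def update_trust(trust: float, outcome_good: bool, rate: float = 0.1) -> float:
    """Experiential learning: nudge trust toward 1 after a good outcome,
    toward 0 after a bad one (simple exponential moving average)."""
    target = 1.0 if outcome_good else 0.0
    return (1 - rate) * trust + rate * target

trust = 0.6
print(disclosable_items(trust, expected_benefit=0.8, risk=0.3))  # ['city', 'street']
trust = update_trust(trust, outcome_good=False)                  # bad experience lowers trust
```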

    An approach to rollback recovery of collaborating mobile agents

    Fault tolerance is one of the main problems that must be resolved to improve the adoption of the agent computing paradigm. In this paper, we analyse the execution model of agent platforms and the significance of the faults affecting their constituent components on the reliable execution of agent-based applications, in order to develop a pragmatic framework for agent system fault tolerance. The developed framework deploys a communication-pair-independent checkpointing strategy to offer a low-cost, application-transparent model for reliable agent-based computing that covers all possible faults that might invalidate reliable agent execution, migration and communication, and maintains the exactly-once execution property.
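
    The sketch below illustrates the general idea of independent checkpointing with rollback recovery and exactly-once message handling. The framework's real interfaces are not reproduced here; the class, method names and bookkeeping are assumptions made for illustration.

```python
# Illustrative sketch of independent checkpointing with rollback recovery.
import copy

class Agent:
    def __init__(self):
        self.state = {"step": 0}
        self.processed = set()       # message ids already handled (exactly-once)
        self._checkpoint = None

    def checkpoint(self):
        # Taken independently of any communication partner, so no
        # coordinated snapshot protocol is needed.
        self._checkpoint = (copy.deepcopy(self.state), set(self.processed))

    def handle(self, msg_id: str):
        if msg_id in self.processed:  # duplicate delivery: drop it
            return
        self.state["step"] += 1
        self.processed.add(msg_id)

    def recover(self):
        # Roll back to the last checkpoint after a fault.
        self.state = copy.deepcopy(self._checkpoint[0])
        self.processed = set(self._checkpoint[1])

a = Agent()
a.checkpoint()
a.handle("m1")
a.recover()        # fault: state rolls back, "m1" will be re-delivered
a.handle("m1")     # replay; its effects count exactly once overall
```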

    On trust and privacy in context-aware systems

    Recent advances in networking, handheld computing and sensor technologies have led to the emergence of context-aware systems. The vast amounts of personal information collected by such systems have led to growing concerns about the privacy of their users. Users concerned about their private information are likely to refuse participation in such systems. Therefore, it is quite clear that for any context-aware system to be acceptable to users, mechanisms for controlling access to personal information are a necessity. According to Alan Westin, "privacy is the claim of individuals, groups, or institutions to determine for themselves when, how and to what extent information is communicated to others" [1]. Within this context we can classify users as either information owners or information receivers. It is also acknowledged that information owners are willing to disclose personal information if this disclosure is potentially beneficial. So, the acceptance of any context-aware system depends on the provision of mechanisms for fine-grained control over the disclosure of personal information that incorporate an explicit notion of benefit.
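
    A minimal sketch of what benefit-aware, fine-grained disclosure control could look like follows; the policy table, its (receiver, item) granularity and the threshold semantics are hypothetical, not drawn from the paper.

```python
# Sketch of fine-grained, benefit-aware disclosure control.
# Per (receiver, item): minimum benefit the owner requires before disclosing.
POLICY = {
    ("family",   "exact_location"): 0.1,
    ("employer", "exact_location"): 0.7,
    ("stranger", "exact_location"): 1.1,   # effectively never
}

def may_disclose(receiver: str, item: str, benefit: float) -> bool:
    required = POLICY.get((receiver, item), float("inf"))  # default: deny
    return benefit >= required

print(may_disclose("family", "exact_location", benefit=0.2))    # True
print(may_disclose("employer", "exact_location", benefit=0.2))  # False
```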

    Privacy, security, and trust issues in smart environments

    Recent advances in networking, handheld computing and sensor technologies have driven research towards the realisation of Mark Weiser's dream of calm and ubiquitous computing (variously called pervasive computing, ambient computing, active spaces, the disappearing computer or context-aware computing). In turn, this has led to the emergence of smart environments as one significant facet of research in this domain. A smart environment, or space, is a region of the real world that is extensively equipped with sensors, actuators and computing components [1]. In effect, the smart space becomes part of a larger information system: all actions within the space may affect the underlying computer applications, which may in turn affect the space through the actuators. Such smart environments have tremendous potential within many application areas to improve the utility of a space. Consider the potential offered by a smart environment that prolongs the time an elderly or infirm person can live an independent life, or by one that supports vicarious learning.
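
    The feedback loop described above (sensors feed applications, which act back on the space through actuators) can be pictured with a toy sense-decide-actuate cycle; every component below is a stand-in, not part of any system discussed in the abstract.

```python
# Toy sketch of a smart-space feedback loop: sensor readings feed an
# application, which may act back on the space through actuators.

def read_sensors():
    return {"temperature": 17.0, "occupied": True}

def decide(readings):
    # Application logic: heat the room only when it is occupied and cold.
    if readings["occupied"] and readings["temperature"] < 19.0:
        return [("heater", "on")]
    return [("heater", "off")]

def actuate(commands):
    for device, action in commands:
        print(f"{device} -> {action}")   # would drive a real actuator

actuate(decide(read_sensors()))          # one pass of the sense/act loop
```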

    The SECURE collaboration model

    The SECURE project has shown how trust can be made computationally tractable while retaining a reasonable connection with human and social notions of trust. SECURE has produced a well-founded theory of trust that has been tested and refined through use in real software such as collaborative spam filtering and an electronic purse. The software comprises the SECURE kernel with extensions for policy specification by application developers. It has yet to be applied to large-scale, multi-domain distributed systems taking different application contexts into account. The project has not considered privacy in evidence distribution, a crucial issue for many application domains, including public services such as healthcare and policing. The SECURE collaboration model has similarities with the trust domain concept, embodying the interaction set of a principal, but SECURE is primarily concerned with pseudonymous entities rather than domain-structured systems.
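
    The kernel-plus-policy split mentioned above might be pictured as follows: a generic core aggregates evidence into a trust value, and the application developer plugs in a policy that decides what that value permits. The interfaces shown are assumptions, not the actual SECURE kernel API.

```python
# Sketch of a kernel/policy split for computational trust.

def trust_from_evidence(outcomes):
    """Kernel side: fraction of good past outcomes, 0.5 when no evidence."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.5

def spam_filter_policy(trust):
    """Application side: e.g. a collaborative spam filter might accept a
    peer's blacklist votes only above some trust level."""
    return "accept_votes" if trust > 0.7 else "ignore_votes"

evidence = [True, True, True, False]      # observed interaction outcomes
print(spam_filter_policy(trust_from_evidence(evidence)))  # accept_votes
```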

    Towards Dynamic Security Perimeters for Virtual Collaborative Networks

    Trust management seems a promising approach for dealing with security concerns in collaborative applications in a global computing environment. However, the characteristics of this environment require a move from reliable identification to mechanisms for the recognition of entities. Furthermore, they require explicit reasoning about the risks of interactions, and a notion of uncertainty in the underlying trust model. From our experience of engineering collaborative applications in such an environment, we found that the relationship between trust and risk is a fundamental issue. In this paper, as an initial step towards an engineering approach for the development of trust-based collaborative applications, we focus on the relationship between trust and risk and explore alternative views of this relationship. We also illustrate how particular views can be exploited in two application scenarios. This paper builds upon our previous work in developing a general model for trust-based collaborations.
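
    Two of the alternative trust-risk views the abstract alludes to could, for example, be formalised as below; both formulations are ours, chosen only to make the contrast concrete.

```python
# Two illustrative readings of the trust/risk relationship (our
# formulations, not the paper's): (a) risk tolerated scales with trust;
# (b) trust required scales with the stakes of the interaction.

def accept_view_a(trust, risk, k=1.0):
    """View (a): a principal tolerates risk proportional to its trust."""
    return risk <= k * trust

def accept_view_b(trust, cost_of_failure, threshold=0.5):
    """View (b): higher stakes demand higher trust before interacting."""
    return trust >= threshold * cost_of_failure

print(accept_view_a(trust=0.8, risk=0.5))               # True
print(accept_view_b(trust=0.8, cost_of_failure=2.0))    # False: stakes too high
```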